Arithmetic precision is the number of digits that are meaningful when expressing a value. In pure mathematics, the digits of a number can go on forever. In practical mathematics, computers can handle only a finite number of digits and measuring instruments can measure only a finite number of digits. When a value is meaningful to 'n' digits, one says that it has 'n' significant digits.
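To make the idea concrete, here is a minimal sketch in Python (an illustration added here, not part of the original article) that rounds a value to 'n' significant digits; the helper name round_to_sig_digits is made up for this example:

    from math import floor, log10

    def round_to_sig_digits(x, n):
        # Round x to n significant digits.
        if x == 0:
            return 0.0
        # The exponent of the leading digit tells us where to round.
        exponent = floor(log10(abs(x)))
        return round(x, n - 1 - exponent)

    print(round_to_sig_digits(3.14159, 3))   # 3.14
    print(round_to_sig_digits(16.4, 2))      # 16.0
    print(round_to_sig_digits(0.004567, 2))  # 0.0046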
Measurement and Arithmetic Precision
Measuring devices can only measure a finite number of digits. Figure 1 shows banana seeds being measured using a metric ruler. The smallest unit of measure on the ruler is millimeters. Each of the banana seeds measures approximately 16 millimeters. Since there are 2 digits in 16, the measurement is accurate to 2 significant digits. See also Measurement Error.
Computation is necessarily limited to a finite number of digits. To compute a value such as π, for example, to an infinite number of digits would take an infinite amount of memory and an infinite amount of time. So, when computing values, one chooses an arithmetic precision, that is, how accurate one wishes the computation to be. For computing devices that store numbers in binary format, such as most computers and many calculators, arithmetic precision is measured in binary digits. For computing devices that store numbers in decimal format, such as many calculators, arithmetic precision is measured in decimal digits.
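As a small illustrative sketch (Python is chosen here only as an example language, not something the original article specifies), the following contrasts the two storage formats: Python's built-in float keeps 53 binary digits of precision, while the decimal module lets one choose a precision in decimal digits:

    from decimal import Decimal, getcontext

    # Binary format: a float carries 53 binary digits of precision,
    # so 0.1 is stored only approximately.
    print(f"{0.1:.20f}")            # 0.10000000000000000555

    # Decimal format: precision is chosen in decimal digits.
    getcontext().prec = 5           # keep 5 significant decimal digits
    print(Decimal(1) / Decimal(3))  # 0.33333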